In the absence of reliable and accurate GPS, visual odometry (VO) has emerged as an effective means of estimating the egomotion of robotic vehicles. Like any dead-reckoning technique, VO suffers from unbounded accumulation of drift error over time, but this accumulation can be limited by incorporating absolute orientation information from, for example, a sun sensor. In this paper, we leverage recent work on visual outdoor illumination estimation to show that estimation error in a stereo VO pipeline can be reduced by inferring the sun position from the same image stream used to compute VO, thereby gaining the benefits of sun sensing without requiring a dedicated sun sensor or the sun to be visible to the camera. We compare sun estimation methods based on hand-crafted visual cues and Convolutional Neural Networks (CNNs) and demonstrate our approach on a combined 7.8 km of urban driving from the popular KITTI dataset, achieving up to a 43% reduction in translational average root mean squared error (ARMSE) and a 59% reduction in final translational drift error compared to pure VO alone.
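To make the reported metrics concrete, the following is a minimal sketch of how a translational RMSE and a final translational drift error could be computed from estimated and ground-truth trajectories. The function names and the exact averaging scheme are illustrative assumptions; the paper's ARMSE is evaluated under the KITTI benchmark protocol, which averages errors over subsequences of varying lengths rather than over raw poses as done here.

```python
import numpy as np

def translational_rmse(est_positions, gt_positions):
    """Root mean squared translational error between an estimated
    trajectory and ground truth (both N x 3 arrays of positions).
    Simplified per-pose version; the KITTI protocol averages over
    subsequences of fixed path lengths instead."""
    err = np.linalg.norm(est_positions - gt_positions, axis=1)
    return np.sqrt(np.mean(err ** 2))

def final_translational_drift(est_positions, gt_positions):
    """Translational error at the final pose of the trajectory."""
    return np.linalg.norm(est_positions[-1] - gt_positions[-1])

# Toy example: a trajectory offset from ground truth by 1 m throughout.
gt = np.zeros((3, 3))
est = np.array([[1.0, 0.0, 0.0]] * 3)
print(translational_rmse(est, gt))         # 1 m offset everywhere
print(final_translational_drift(est, gt))  # 1 m at the final pose
```

A "43% reduction in ARMSE" then corresponds to the sun-aided pipeline producing an RMSE value 0.57 times that of the pure-VO baseline on the same sequences.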